This letter focuses on the task of Multi-Target Multi-Camera vehicle tracking. We propose to associate single-camera trajectories into multi-camera global trajectories by training a Graph Convolutional Network. Our approach simultaneously processes all cameras, providing a global solution, and is robust to large camera desynchronization. Furthermore, we design a new loss function to deal with class imbalance. Our proposal outperforms related work, showing better generalization and, unlike the compared approaches, requiring no ad-hoc manual annotations or thresholds.
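The abstract does not detail the imbalance-aware loss. As a hedged illustration of the general idea only (not the authors' actual formulation), a class-weighted binary cross-entropy weights each class inversely to its frequency, so rare positive pairs (matching trajectories) are not drowned out by abundant negatives:

```python
import numpy as np

def weighted_bce(y_true, y_prob, eps=1e-12):
    """Class-weighted binary cross-entropy: each class is weighted
    inversely to its frequency, up-weighting the rare positive class.
    An illustrative sketch, not the loss proposed in the letter."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    n_pos = max(y_true.sum(), 1.0)
    n_neg = max(len(y_true) - y_true.sum(), 1.0)
    w_pos = len(y_true) / (2.0 * n_pos)   # minority class gets larger weight
    w_neg = len(y_true) / (2.0 * n_neg)
    losses = -(w_pos * y_true * np.log(y_prob)
               + w_neg * (1 - y_true) * np.log(1 - y_prob))
    return float(losses.mean())
```

With such weighting, missing the single positive in a batch costs more than missing one of many negatives.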
The accelerated use of digital cameras prompts increasing concerns about privacy and security, particularly in applications such as action recognition. In this paper, we propose an optimization framework that provides robust visual privacy protection along the human action recognition pipeline. Our framework parameterizes the camera lens to successfully degrade the quality of the videos so as to inhibit privacy attributes and protect against adversarial attacks, while maintaining the features relevant for activity recognition. We validate our approach with extensive simulations and hardware experiments.
A machine learning (ML) model for predicting product state distributions from specific initial states (state-to-distribution, or STD) is presented and quantitatively tested for the N($^4$S) + O$_2$(X$^3\Sigma_{\rm g}^-$) $\rightarrow$ NO(X$^2\Pi$) + O($^3$P) reaction. The reference data set used for training the neural network (NN) consists of final state distributions determined from explicit quasi-classical trajectory (QCT) simulations for $\sim 2000$ initial conditions. Overall, the prediction accuracy, as quantified by the root-mean-squared difference $(\sim 0.003)$ and $R^2$ $(\sim 0.99)$ between reference QCT results and predictions of the STD model, is high for the test set, for off-grid state-specific initial conditions, and for initial conditions drawn from reactant state distributions characterized by translational, rotational, and vibrational temperatures. Compared with a coarser-grained distribution-to-distribution (DTD) model evaluated on the same initial state distributions, the STD model shows the additional benefit of state resolution in the reactant preparation with comparable performance. Starting from specific initial states also leads to more diverse final state distributions, which requires a more expressive neural network than DTD. Direct comparison between explicit QCT simulations, the STD model, and the widely used Larsen-Borgnakke (LB) model shows that the STD model is quantitative, whereas the LB model is qualitative at best for the rotational distributions $P(j')$ and fails for the vibrational distributions $P(v')$. As such, the STD model is well-suited for simulating nonequilibrium high-speed flows, e.g., using the direct simulation Monte Carlo method.
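The accuracy metrics quoted above (root-mean-squared difference and R^2 between reference QCT and predicted distributions) are standard; a minimal sketch of how they are computed for a pair of discretized state distributions:

```python
import numpy as np

def rmsd(p_ref, p_pred):
    """Root-mean-squared difference between two discretized distributions."""
    p_ref, p_pred = np.asarray(p_ref, float), np.asarray(p_pred, float)
    return float(np.sqrt(np.mean((p_ref - p_pred) ** 2)))

def r_squared(p_ref, p_pred):
    """Coefficient of determination of the prediction vs. the reference."""
    p_ref, p_pred = np.asarray(p_ref, float), np.asarray(p_pred, float)
    ss_res = np.sum((p_ref - p_pred) ** 2)
    ss_tot = np.sum((p_ref - p_ref.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Applied to, say, a Boltzmann-like rotational distribution and a slightly biased prediction, these reproduce the kind of RMSD/R^2 figures reported in the abstract.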
Uncertainty quantification is crucial to inverse problems, as it can provide decision-makers with valuable information about the inversion results. For example, seismic inversion is a notoriously ill-posed inverse problem due to the band-limited and noisy nature of seismic data. It is therefore of paramount importance to quantify the uncertainties associated with the inversion process to ease the subsequent interpretation and decision-making processes. Within this framework of reference, sampling from a target posterior provides a fundamental approach to quantifying the uncertainty in seismic inversion. However, selecting appropriate prior information in a probabilistic inversion is crucial, yet non-trivial, as it influences the ability of sampling-based inference to provide geological realism in the posterior samples. To overcome such limitations, we present a regularized variational inference framework that performs posterior inference by implicitly regularizing the Kullback-Leibler divergence loss with a CNN-based denoiser by means of the Plug-and-Play method. We call this new algorithm Plug-and-Play Stein Variational Gradient Descent (PnP-SVGD) and demonstrate its ability to produce high-resolution, trustworthy samples representative of the subsurface structures, which we argue could be used for post-inference tasks such as reservoir modelling and history matching. To validate the proposed method, numerical tests are performed on both synthetic and field post-stack seismic data.
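Vanilla SVGD itself (without the Plug-and-Play denoiser regularization this paper adds) admits a compact sketch. The following NumPy implementation of the standard SVGD update of Liu and Wang, not the authors' PnP-SVGD, moves a particle set toward a simple 1-D Gaussian target:

```python
import numpy as np

def svgd_step(x, score, h=1.0, stepsize=0.05):
    """One vanilla SVGD update for a 1-D particle set `x`.
    `score(x)` returns grad log p(x) of the target; RBF kernel, bandwidth h."""
    diff = x[None, :] - x[:, None]          # diff[i, j] = x_j - x_i
    k = np.exp(-diff ** 2 / (2 * h ** 2))   # k[i, j] = k(x_j, x_i)
    grad_k = -diff / h ** 2 * k             # d k(x_j, x_i) / d x_j  (repulsion)
    # phi[i] = E_j[ k(x_j, x_i) * score(x_j) + grad_j k(x_j, x_i) ]
    phi = (k * score(x)[None, :] + grad_k).mean(axis=1)
    return x + stepsize * phi

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=50)   # start far from the target mode
target_score = lambda x: -(x - 3.0)         # grad log of N(3, 1)
for _ in range(1000):
    particles = svgd_step(particles, target_score)
```

The kernel-weighted score term drives particles toward high-density regions, while the kernel-gradient term repels them from each other, so the final set approximates the posterior rather than collapsing onto its mode.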
Graphic layout designs play an essential role in visual communication. Yet handcrafting layout designs is skill-demanding, time-consuming, and non-scalable to batch production. Although generative models have emerged to make design automation no longer utopian, it remains non-trivial to customize designs that comply with designers' multimodal desires, i.e., constrained by background images and driven by foreground contents. In this study, we propose \textit{LayoutDETR}, which inherits the high quality and realism of generative modeling while reformulating content-aware requirements as a detection problem: we learn to detect, in a background image, the reasonable locations, scales, and spatial relations for multimodal elements in a layout. Experiments validate that our solution yields new state-of-the-art performance for layout generation on public benchmarks and on our newly-curated ads banner dataset. For practical usage, we build our solution into a graphical system that facilitates user studies. We demonstrate that our designs attract more subjective preference than baselines by significant margins. Our code, models, dataset, graphical system, and demos are available at https://github.com/salesforce/LayoutDETR.
The understanding capabilities of current state-of-the-art 3D models are limited by datasets with a small number of annotated data and a pre-defined set of categories. In its 2D counterpart, recent advances have shown that similar problems can be significantly alleviated by employing knowledge from other modalities, such as language. Inspired by this, leveraging multimodal information for 3D modality could be promising to improve 3D understanding under the restricted data regime, but this line of research is not well studied. Therefore, we introduce ULIP to learn a unified representation of image, text, and 3D point cloud by pre-training with object triplets from the three modalities. To overcome the shortage of training triplets, ULIP leverages a pre-trained vision-language model that has already learned a common visual and textual space by training with massive image-text pairs. Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets. ULIP is agnostic to 3D backbone networks and can easily be integrated into any 3D architecture. Experiments show that ULIP effectively improves the performance of multiple recent 3D backbones by simply pre-training them on ShapeNet55 using our framework, achieving state-of-the-art performance in both standard 3D classification and zero-shot 3D classification on ModelNet40 and ScanObjectNN. ULIP also improves the performance of PointMLP by around 3% in 3D classification on ScanObjectNN, and outperforms PointCLIP by 28.8% on top-1 accuracy for zero-shot 3D classification on ModelNet40. Our code and pre-trained models will be released.
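ULIP's exact alignment objective is not spelled out in the abstract. A common choice for aligning the embedding spaces of two modalities (e.g., point-cloud features against the frozen image-text space) is a symmetric InfoNCE contrastive loss; the sketch below is an assumption for illustration, with a hypothetical function name and temperature, not necessarily ULIP's formulation:

```python
import numpy as np

def contrastive_alignment_loss(z_a, z_b, tau=0.07):
    """Symmetric InfoNCE loss aligning two modalities' embeddings:
    matching rows of z_a and z_b are positives, all other pairs
    are negatives. Illustrative sketch, not ULIP's exact loss."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / tau              # cosine similarities / temperature

    def xent(l):                            # cross-entropy with diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -float(np.mean(np.diag(logp)))

    return 0.5 * (xent(logits) + xent(logits.T))
```

Correctly aligned pairs drive the loss toward zero, while mismatched pairings are penalized, which is the mechanism that lets a small number of synthesized triplets pull the 3D space into the pre-trained image-text space.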
The library scikit-fda is a Python package for Functional Data Analysis (FDA). It provides a comprehensive set of tools for representation, preprocessing, and exploratory analysis of functional data. The library is built upon and integrated in Python's scientific ecosystem. In particular, it conforms to the scikit-learn application programming interface so as to take advantage of the functionality for machine learning provided by this package: pipelines, model selection, and hyperparameter tuning, among others. The scikit-fda package has been released as free and open-source software under a 3-Clause BSD license and is open to contributions from the FDA community. The library's extensive documentation includes step-by-step tutorials and detailed examples of use.
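Conceptually, the library's core representation is a set of functions discretized on a common grid, on which cross-sectional statistics and L2 geometry are defined. The plain-NumPy sketch below illustrates that representation only; it is deliberately not the scikit-fda API:

```python
import numpy as np

# A minimal stand-in for discretized functional data: each sample is a
# function observed on a shared grid (conceptually what a grid-based
# functional data object holds; NOT the scikit-fda API).
grid = np.linspace(0.0, 1.0, 101)
samples = np.stack([np.sin(2 * np.pi * grid + phase)
                    for phase in (0.0, 0.3, 0.6)])

mean_function = samples.mean(axis=0)   # cross-sectional mean function
centered = samples - mean_function     # a typical preprocessing step

def l2_inner(f, g, t):
    """L2 inner product of two discretized functions via the trapezoid rule."""
    fg = f * g
    return float(0.5 * np.sum((fg[1:] + fg[:-1]) * np.diff(t)))
```

Exploratory FDA tools (norms, distances, depth measures) are built on exactly this kind of inner product, which is what lets the library plug functional observations into scikit-learn-style pipelines.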
We present Sauron, a filter pruning method that eliminates redundant feature maps by discarding the corresponding filters with automatically adjusted, layer-specific thresholds. Furthermore, Sauron minimizes a regularization term that, as we show with various metrics, promotes the formation of feature-map clusters. In contrast to most filter pruning methods, Sauron is single-phase, similar to typical neural network optimization, and it requires fewer hyperparameters and design decisions. Additionally, unlike other cluster-based approaches, our method does not require pre-selecting the number of clusters, which is non-trivial to determine and varies across layers. We evaluated Sauron and three state-of-the-art filter pruning methods on three medical image segmentation tasks. This is an area where filter pruning has received little attention and where it can help to build efficient models for medical-grade computers that cannot use cloud services due to privacy considerations. Sauron achieved models with higher performance and pruning rates than the competing pruning methods. Additionally, since Sauron removes filters during training, its optimization accelerates over time. Finally, we show that the feature maps of Sauron-pruned models are highly interpretable. The Sauron code is publicly available at https://github.com/jmlipman/sauronunet.
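The abstract describes pruning redundant feature maps against layer-specific distance thresholds but not the exact procedure. As a hedged sketch of distance-based redundancy pruning in general (a simplification, not the actual Sauron algorithm), one can greedily keep only filters whose feature maps are sufficiently far from those already kept:

```python
import numpy as np

def prune_redundant_filters(feature_maps, threshold):
    """Greedy redundancy pruning: keep a filter only if its (flattened)
    feature map lies at least `threshold` away, in Euclidean distance,
    from every feature map already kept. Returns indices of kept filters.
    A simplified illustration, not the exact Sauron procedure."""
    flat = feature_maps.reshape(feature_maps.shape[0], -1)
    kept = []
    for i, fm in enumerate(flat):
        if all(np.linalg.norm(fm - flat[j]) >= threshold for j in kept):
            kept.append(i)
    return kept
```

A threshold that adapts per layer (as the abstract states) would replace the fixed `threshold` argument here.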
The design of a safe and reliable Autonomous Driving (AD) stack is one of the most challenging tasks of our era. These AD stacks are expected to drive with full autonomy in highly dynamic environments, with higher reliability than humans. In this sense, to navigate arbitrarily complex traffic scenarios efficiently and safely, an AD stack must have the ability to forecast the future trajectories of the surrounding actors. Current state-of-the-art models, typically based on recurrent, graph, and convolutional networks, have achieved remarkable results in the context of vehicle prediction. In this paper, we explore the influence of attention in generative models for motion prediction, considering both the physical and social context to compute the most plausible trajectories. We first encode the past trajectories using an LSTM network, which serves as input to a multi-head self-attention module that computes the social context. On the other hand, we formulate a weighted interpolation to compute the velocity and orientation in the last observed frame in order to calculate acceptable target points, extracted from the driveable area of the HDMap information, which represents our physical context. Finally, the input of our generator is a white-noise vector sampled from a multivariate normal distribution, while the social and physical context act as its conditions, in order to predict feasible trajectories. We validate our method using the Argoverse Motion Forecasting Benchmark 1.1, achieving competitive unimodal results.
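The abstract does not give the weights of its interpolation scheme. As an illustrative stand-in only (the decay weighting below is an assumption, not the paper's formulation), velocity and heading at the last observed frame can be estimated by a weighted average of finite differences that emphasizes recent frames:

```python
import numpy as np

def weighted_velocity_and_heading(track, decay=0.5):
    """Estimate speed and heading at the last observed frame via a
    recency-weighted average of per-step displacements.
    `track` is a (T, 2) array of x/y positions at unit time steps.
    Illustrative sketch; the paper's exact weights are not given."""
    deltas = np.diff(track, axis=0)                    # per-step displacements
    w = decay ** np.arange(len(deltas) - 1, -1, -1)    # newest step -> weight 1
    v = (w[:, None] * deltas).sum(axis=0) / w.sum()    # weighted mean velocity
    speed = float(np.linalg.norm(v))
    heading = float(np.arctan2(v[1], v[0]))            # radians
    return speed, heading
```

The resulting velocity and heading are what would anchor the selection of acceptable target points on the driveable area of the HDMap.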
Networks of social interactions are the substrate upon which civilizations are built. Often, we create new bonds with people that we like, or feel that our relationships are damaged through the intervention of third parties. Despite their importance and the huge impact these processes have on our lives, a quantitative scientific understanding of them is still in its infancy, mainly due to the difficulty of collecting large datasets of social networks that include individual attributes. In this work, we present a thorough study of real social networks of 13 schools, with more than 3,000 students and 60,000 declared positive and negative relations, including tests of personal traits for all students. We introduce a metric -- the "triadic influence" -- that measures the influence of nearest neighbors on the relationships of their contacts. We use neural networks to predict the relationships and to extract the probability that two students are friends or enemies based on their individual attributes or on the triadic influence. Alternatively, a high-dimensional embedding of the network structure can be used to predict the relationships. Remarkably, the triadic influence (a simple one-dimensional metric) achieves the highest accuracy in predicting the relationship between two students. We postulate that the probabilities extracted from the neural networks -- functions of the triadic influence and of the students' personalities -- control the evolution of real social networks, opening new avenues for the quantitative study of these systems.
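The precise definition of the triadic influence is not given in the abstract. A hedged, simplified reading in the spirit of structural balance ("the friend of my friend is my friend") averages, over the common neighbors of a pair, the product of the signs of the two connecting ties:

```python
import numpy as np

def triadic_influence(adj, i, j):
    """Signed influence of shared contacts on the (i, j) tie: mean over
    common neighbors k of sign(i, k) * sign(k, j). `adj` is a signed
    adjacency matrix with +1 (friend), -1 (enemy), 0 (no declared tie).
    A simplified illustration, not necessarily the paper's exact metric."""
    mask = (adj[i] != 0) & (adj[:, j] != 0)   # common neighbors of i and j
    mask[i] = mask[j] = False                 # exclude the pair itself
    if not mask.any():
        return 0.0
    return float(np.mean(adj[i, mask] * adj[mask, j]))
```

A value near +1 indicates that third parties consistently push the pair toward friendship; a value near -1, toward enmity; mixed signals cancel toward 0.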